
    Statistical Methods For Truncated Survival Data

    Truncation is a well-known phenomenon that may be present in observational studies of time-to-event data. For example, autopsy-confirmed survival studies of neurodegenerative diseases are subject to selection bias due to the simultaneous presence of left and right truncation, also known as double truncation. While many methods exist to adjust for either left or right truncation, very few adjust for double truncation. When time-to-event data are doubly truncated, the regression coefficient estimators from the standard Cox regression model will be biased. In this dissertation, we develop two novel methods to adjust for double truncation when fitting the Cox regression model. The first method uses a weighted estimating equation approach and assumes the survival and truncation times are independent. The second method relaxes this independence assumption to one of conditional independence between the survival and truncation times. Unlike methods that ignore truncation, both proposed methods yield consistent and asymptotically normal regression coefficient estimators and have little bias in small samples. We use these proposed methods to assess the effect of cognitive reserve on survival in individuals with autopsy-confirmed Alzheimer’s disease. We also conduct an extensive simulation study to compare survival distribution function estimators in the presence of double truncation, and a case study to compare the survival times of individuals with autopsy-confirmed Alzheimer’s disease and frontotemporal lobar degeneration. Furthermore, we introduce an R package implementing the above methods for fitting the Cox model and estimating the survival distribution function under double truncation.
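The weighted estimating-equation idea can be sketched in a few lines: under the independence assumption, each observed subject is weighted by the inverse of its probability of surviving the double-truncation selection. The simulation below is a hypothetical illustration, not the dissertation's estimator or its R package; the uniform truncation mechanism, the single covariate, and the simple Newton solver are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

# Subjects are kept only if L <= T <= R, and each kept subject is weighted
# by 1 / P(L <= T <= R | T). All quantities below are illustrative.
n = 2000
x = rng.normal(size=n)
t = rng.exponential(scale=np.exp(-0.5 * x))    # true coefficient is 0.5
L = rng.uniform(0.0, 0.5, size=n)              # left-truncation times
R = rng.uniform(1.0, 3.0, size=n)              # right-truncation times
keep = (L <= t) & (t <= R)
x, t = x[keep], t[keep]

# Inclusion probability under the assumed-known uniform mechanism,
# clipped below to guard against extreme weights.
p = np.clip(t / 0.5, 0, 1) * np.clip((3.0 - t) / 2.0, 0, 1)
w = 1.0 / np.clip(p, 0.02, 1.0)

def score_and_info(beta):
    """Weighted Cox score and information; every subject is an event."""
    eta = np.exp(beta * x)
    U = info = 0.0
    for i in range(len(t)):
        risk = t >= t[i]                       # risk set at time t_i
        s0 = np.sum(w[risk] * eta[risk])
        s1 = np.sum(w[risk] * eta[risk] * x[risk])
        s2 = np.sum(w[risk] * eta[risk] * x[risk] ** 2)
        U += w[i] * (x[i] - s1 / s0)
        info += w[i] * (s2 / s0 - (s1 / s0) ** 2)
    return U, info

beta = 0.0
for _ in range(25):                            # Newton-Raphson iterations
    U, info = score_and_info(beta)
    beta += U / info
print(round(beta, 3))                          # should land near the true 0.5
```

An unweighted fit on the same truncated sample (all weights set to 1) would be expected to drift away from the true coefficient, which is the bias the proposed methods correct.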

    Ontologies Applied in Clinical Decision Support System Rules: Systematic Review

    Background Clinical decision support systems (CDSSs) are important for the quality and safety of health care delivery. Although CDSS rules guide CDSS behavior, they are not routinely shared and reused. Objective Ontologies have the potential to promote the reuse of CDSS rules. We therefore systematically screened the literature to characterize the current status of ontologies applied in CDSS rules, including rule management, which uses captured rule usage data and user feedback to tailor CDSS services, and rule maintenance, which updates CDSS rules. Through this systematic literature review, we aim to identify the frontiers of ontologies used in CDSS rules. Methods The literature search focused on the intersection of ontologies, clinical decision support, and rules in PubMed, the Association for Computing Machinery (ACM) Digital Library, and the Nursing & Allied Health Database. Grounded theory and PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2020 guidelines were followed. One author initiated the screening and literature review, and two authors validated the processes and results independently. The inclusion and exclusion criteria were developed and refined iteratively. Results Among the 81 included publications, CDSSs were primarily used to manage chronic conditions, issue alerts for medication prescriptions, provide reminders for immunizations and preventive services, and support diagnoses and treatment recommendations. The CDSS rules were presented in Semantic Web Rule Language, Jess, or Jena formats. Although ontologies have been used to provide medical knowledge, CDSS rules, and terminologies, they have not been used in CDSS rule management or to facilitate the reuse of CDSS rules. Conclusions Ontologies have been used to organize and represent medical knowledge, controlled vocabularies, and the content of CDSS rules. So far, there has been little reuse of CDSS rules. More work is needed to improve the reusability and interoperability of CDSS rules. This review identified and described the ontologies that, despite their limitations, enable Semantic Web technologies and their applications in CDSS rules.
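The kind of reuse the review calls for can be sketched with a rule whose conditions reference ontology concept identifiers rather than site-local codes, so the same rule can run against any record mapped to the ontology. The concept ID, threshold, and rule below are invented for illustration and do not come from the reviewed systems.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    conditions: dict        # ontology concept ID -> minimum value to fire
    action: str

def fires(rule, record):
    """A rule fires when every condition concept meets its threshold."""
    return all(record.get(cid, 0) >= v for cid, v in rule.conditions.items())

# Hypothetical rule keyed to a shared concept ID instead of a local lab code.
followup = Rule(
    name="diabetes-followup",
    conditions={"concept:HbA1c": 7.0},     # HbA1c of 7.0% or higher
    action="recommend a follow-up visit",
)

# Two sites can share the rule because both map local labs to the concept ID.
print(fires(followup, {"concept:HbA1c": 8.1}))   # True
print(fires(followup, {"concept:HbA1c": 6.2}))   # False
```

In practice the conditions would be expressed in a rule language such as SWRL over a full ontology; the sketch only shows why a shared vocabulary makes rules portable.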

    A Systematic Approach to Configuring MetaMap for Optimal Performance

    Background MetaMap is a valuable tool for processing biomedical texts to identify concepts. Although MetaMap is highly configurable, configuration decisions are not straightforward. Objective To develop a systematic, data-driven methodology for configuring MetaMap for optimal performance. Methods MetaMap, the word2vec model, and the phrase model were used to build a pipeline. For unsupervised training, the phrase and word2vec models used abstracts related to clinical decision support as input. During testing, MetaMap was configured with the default option, one behavior option, and two behavior options. For each configuration, cosine and soft cosine similarity scores between identified entities and gold-standard terms were computed for 40 annotated abstracts (422 sentences). The similarity scores were used to calculate and compare the overall percentages of exact matches, similar matches, and missing gold-standard terms among the abstracts for each configuration. The results were manually spot-checked. Precision, recall, and F-measure (β = 1) were calculated. Results The percentages of exact matches and missing gold-standard terms were 0.6–0.79 and 0.09–0.3 for one behavior option, and 0.56–0.8 and 0.09–0.3 for two behavior options, respectively. The percentages of exact matches and missing terms for soft cosine similarity scores exceeded those for cosine similarity scores. The average precision, recall, and F-measure were 0.59, 0.82, and 0.68 for exact matches, and 1.00, 0.53, and 0.69 for missing terms, respectively. Conclusion We demonstrated a systematic approach that provides objective and accurate evidence to guide MetaMap configuration for optimal performance. Combining such objective evidence with the current practice of relying on principles, experience, and intuition outperforms any single strategy for configuring MetaMap. Our methodology, reference code, measurements, results, and workflow are valuable references for optimizing and configuring MetaMap.
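The two similarity scores the pipeline compares can be sketched directly; the toy term-count vectors and term-similarity matrix below stand in for the word2vec output, and the example terms are invented.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def soft_cosine(a, b, S):
    """Soft cosine over term-count vectors a, b with term-similarity matrix S."""
    return float(a @ S @ b / np.sqrt((a @ S @ a) * (b @ S @ b)))

def f_measure(tp, fp, fn, beta=1.0):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
    return precision, recall, f

# "heart attack" vs "myocardial infarction": no shared terms, so plain cosine
# is 0, but soft cosine credits the assumed 0.9 term-term similarity.
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
S = np.array([[1.0, 0.9],
              [0.9, 1.0]])
print(cosine(a, b))                    # 0.0
print(round(soft_cosine(a, b, S), 3))  # 0.9
```

This is why soft cosine finds more matches than plain cosine in the study: semantically related but lexically different terms receive partial credit through the off-diagonal entries of S.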

    Modelling the impact of presemester testing on COVID-19 outbreaks in university campuses

    Objectives Universities are exploring strategies to mitigate the spread of COVID-19 prior to reopening their campuses. National guidelines do not currently recommend testing students prior to campus arrival, and the impact of presemester testing has not been studied. Design Dynamic SARS-CoV-2 transmission models are used to explore the effects of three presemester testing interventions. Interventions Testing of students 0, 1, and 2 times prior to campus arrival. Primary outcomes Number of active infections and time until isolation bed capacity is reached. Setting We set on-campus and off-campus populations to 7500 and 17 500 students, respectively. We assumed 2% prevalence of active cases at the semester start, and that one-third of infected students would be detected and isolated throughout the semester. Isolation bed capacity was set at 500. We varied disease transmission rates (R0 = 1.5, 2, 3, 4) to represent the effectiveness of mitigation strategies throughout the semester. Results Without presemester screening, the peak number of active infections ranged from 4114 under effective mitigation strategies (R0 = 1.5) to 10 481 under ineffective mitigation strategies (R0 = 4), and isolation bed capacity was exhausted within 10 (R0 = 4) to 25 days (R0 = 1.5). Mandating at least one test prior to campus arrival delayed the timing and reduced the size of the peak, while delaying the time until isolation bed capacity was reached. Testing twice in conjunction with effective mitigation strategies (R0 = 1.5) was the only scenario that did not exhaust isolation bed capacity during the semester. Conclusions Presemester screening is necessary to avert early and large surges of active COVID-19 infections; we therefore recommend testing within 1 week prior to and on campus return. While this strategy is sufficient for delaying the timing of the peak outbreak, presemester testing would need to be implemented in conjunction with effective mitigation strategies to significantly reduce outbreak size and preserve isolation bed capacity.
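The mechanism can be illustrated with a toy discrete-time SIR model: each pre-arrival test removes a fraction of the seeded infections, which delays and lowers the peak. The parameters below (test sensitivity, R0, infectious period) are illustrative defaults, not the paper's calibration.

```python
def peak_infections(n=25000, prevalence=0.02, tests=1, sensitivity=0.9,
                    r0=2.0, infectious_days=10, days=120):
    """Peak active infections after `tests` pre-arrival screening rounds."""
    beta = r0 / infectious_days
    i = n * prevalence * (1 - sensitivity) ** tests  # infections missed by testing
    s = n - i
    peak = i
    for _ in range(days):                  # one step per day
        new = beta * s * i / n
        i += new - i / infectious_days
        s -= new
        peak = max(peak, i)
    return peak

# Each additional pre-arrival test shrinks the seeded infections,
# delaying the outbreak and lowering its peak.
for k in (0, 1, 2):
    print(k, round(peak_infections(tests=k)))
```

A fuller model would, as in the paper, track isolation bed occupancy and separate on- and off-campus mixing; the sketch only shows the direction of the testing effect.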

    Accounting for confounding by time, early intervention adoption, and time-varying effect modification in the design and analysis of stepped-wedge designs: application to a proposed study design to reduce opioid-related mortality

    Abstract Background Beginning in 2019, stepped-wedge designs (SWDs) have been used to investigate interventions intended to reduce opioid-related deaths in communities across the United States. However, these interventions compete with external factors such as newly initiated public policies limiting opioid prescriptions, media awareness campaigns, and the COVID-19 pandemic. Furthermore, control communities may prematurely adopt components of the intervention as they become available. The presence of time-varying external factors that impact study outcomes is a well-known limitation of SWDs; common approaches to adjusting for them use a mixed effects modeling framework. However, these models have several shortcomings when external factors differentially impact intervention and control clusters. Methods We discuss limitations of commonly used mixed effects models in the context of proposed SWDs to investigate interventions intended to reduce opioid-related mortality, and propose extensions of these models to address those limitations. We conduct an extensive simulation study of anticipated data from SWD trials targeting the current opioid epidemic in order to examine the performance of these models in the presence of external factors. We consider confounding by time, premature adoption of intervention components, and time-varying effect modification, in which external factors differentially impact intervention and control clusters. Results In the presence of confounding by time, commonly used mixed effects models yield unbiased intervention effect estimates but can have inflated Type 1 error and undercoverage of confidence intervals. These models yield biased intervention effect estimates when premature intervention adoption or effect modification is present. In such scenarios, models incorporating fixed intervention-by-time interactions with an unstructured covariance for intervention-by-cluster-by-time random effects yield unbiased intervention effect estimates, reach nominal confidence interval coverage, and preserve Type 1 error. Conclusions Mixed effects models can adjust for different combinations of external factors through correct specification of fixed and random time effects. Because model choice has considerable impact on the validity of results and on study power, careful consideration must be given to how these external factors impact study endpoints and to which estimands are most appropriate in the presence of such factors.
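Confounding by time, the first external factor considered, can be seen in a small simulation: clusters cross from control to intervention at staggered steps while a secular trend also moves the outcome, so a naive treated-vs-control comparison is biased until time is adjusted for. This is a deliberately simplified sketch; an OLS fit with period fixed effects stands in for the mixed models the paper studies, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

clusters, periods = 6, 7
cross = np.arange(1, clusters + 1)             # period each cluster crosses over
c, p = np.meshgrid(np.arange(clusters), np.arange(periods), indexing="ij")
x = (p >= cross[:, None]).astype(float)        # intervention indicator
trend = -0.3 * p                               # secular time trend (confounder)
y = 2.0 * x + trend + rng.normal(0, 0.1, x.shape)  # true effect = 2.0

# Naive comparison ignores time: treated cells sit later in the trend.
naive = y[x == 1].mean() - y[x == 0].mean()

# Intervention indicator plus a full set of period fixed effects.
time_dummies = (p[..., None] == np.arange(periods)).astype(float)
X = np.concatenate([x[..., None], time_dummies], axis=-1).reshape(-1, 1 + periods)
beta, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
print(round(naive, 2), round(beta[0], 2))      # adjusted estimate recovers ~2.0
```

The paper's point goes further: when the trend hits intervention and control clusters differently, fixed time effects alone no longer suffice and intervention-by-time terms are needed.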

    The impact of phased university reopenings on mitigating the spread of COVID-19: a modeling study

    Abstract Background Several American universities have experienced COVID-19 outbreaks, risking the health of their students, employees, and local communities. Such large outbreaks have drained university resources and forced several institutions to shift to remote learning and send students home, further contributing to community disease spread. Many of these outbreaks can be attributed to the large numbers of active infections returning to campus, alongside high-density social events that typically take place at the semester start. In the absence of effective mitigation measures (e.g., high-frequency testing), a phased return of students to campus is a practical intervention to minimize the student population size and density early in the semester, reduce outbreaks, preserve institutional resources, and ultimately help mitigate disease spread in communities. Methods We develop dynamic compartmental SARS-CoV-2 transmission models to assess the impact of a phased reopening, in conjunction with pre-arrival testing, on minimizing on-campus outbreaks and preserving university resources (measured by isolation bed capacity). We assumed an on-campus population of N = 7500, that 40% of infected students require isolation, a 10-day isolation period, that pre-arrival testing removes 90% of incoming infections, and that phased reopening returns one-third of the student population to campus each month. We vary the disease reproductive number (Rt) between 1.5 and 3.5 to represent the effectiveness of alternative mitigation strategies throughout the semester. Results Compared to pre-arrival testing only or neither intervention, phased reopening with pre-arrival testing reduced peak active infections by 3 and 22% (Rt = 1.5), 22 and 29% (Rt = 2.5), 41 and 45% (Rt = 3.5), and 54 and 58% (improving Rt), respectively. Required isolation bed capacity decreased between 20 and 57% for values of Rt ≥ 2.5. Conclusion Unless highly effective mitigation measures are in place, a phased reopening with pre-arrival testing substantially reduces the peak number of active infections throughout the semester and preserves university resources compared to the simultaneous return of all students to campus. Phased reopenings allow institutions to ensure sufficient resources are in place, improve disease mitigation strategies, or, if needed, preemptively move online before the return of additional students to campus, thus preventing unnecessary harm to students, institutional faculty and staff, and local communities.
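The phasing mechanism itself can be sketched with a toy discrete-time SIR model in which screened cohorts arrive on given days: the population at risk grows in steps rather than all at once, flattening the peak. Parameters and cohort sizes below are illustrative and are not the paper's calibration.

```python
def peak_with_arrivals(waves, prevalence=0.02, sensitivity=0.9,
                       rt=2.5, infectious_days=10, days=120):
    """Peak active infections when cohorts arrive on scheduled days."""
    beta = rt / infectious_days
    arrivals = dict(waves)                 # arrival day -> cohort size
    s = i = r = peak = 0.0
    for day in range(days):
        if day in arrivals:
            cohort = arrivals[day]
            missed = cohort * prevalence * (1 - sensitivity)
            i += missed                    # infections slipping past screening
            s += cohort - missed
        n = s + i + r
        if n > 0:                          # one SIR step per day
            new = beta * s * i / n
            rec = i / infectious_days
            i += new - rec
            r += rec
            s -= new
        peak = max(peak, i)
    return peak

all_at_once = peak_with_arrivals([(0, 7500)])
phased = peak_with_arrivals([(0, 2500), (30, 2500), (60, 2500)])
print(round(all_at_once), round(phased))   # phased return flattens the peak
```

Because later cohorts arrive after part of the first wave has already recovered, the susceptible pool never fuels a single combined surge, which is the resource-preserving effect the paper quantifies.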

    The Association between Food Security Status and the Home Food Environment among a Sample of Rural South Carolina Residents

    Prior research suggests that food security status may have an effect on the home food environment, and the literature suggests that food access factors may modify this relationship. The purpose of this research is to fill a gap in the literature on this relationship and to identify potential food access effect modifiers. This research employs linear mixed effects modeling with a random intercept (zip code). Eleven food access variables are included in regression analyses and are tested as potential effect modifiers of the association between food security status and the home food environment. Food security status is significantly associated with the home food environment (95% CI = 0.1–1.38) in the unadjusted model. In the adjusted model, food pantry usage is a significant effect modifier of the association between food security status and the home food environment. This research concludes that food security status has a significant but disparate effect on the home food environment depending on participant food pantry usage. A practical implication of this research is that relevant stakeholders could improve rural food pantry access in order to improve the home food environment of rural and food-insecure populations.
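Effect modification of the kind tested here amounts to an interaction term in the regression. The sketch below simulates such data and fits it with ordinary least squares standing in for the study's mixed model (the zip-code random intercept is omitted); all coefficients and variable codings are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated home food environment score as a function of food security,
# pantry use, and their interaction (the hypothesized effect modifier).
n = 500
food_secure = rng.integers(0, 2, n).astype(float)
pantry_use = rng.integers(0, 2, n).astype(float)
y = (1.0 * food_secure + 0.5 * pantry_use
     + 1.5 * food_secure * pantry_use      # true interaction effect = 1.5
     + rng.normal(0, 0.5, n))

# Design: intercept, main effects, interaction. beta[3] is the modification.
X = np.column_stack([np.ones(n), food_secure, pantry_use,
                     food_secure * pantry_use])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print([round(b, 2) for b in beta])         # interaction recovered in beta[3]
```

A nonzero interaction coefficient means the food-security effect differs by pantry usage, which is exactly the "significant but disparate effect" the abstract reports.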

    Systematic examination of methodological inconsistency in operationalizing cognitive reserve and its impact on identifying predictors of late-life cognition

    Abstract Background Cognitive reserve (CR) is the ability to maintain cognitive performance despite brain pathology. CR is built through lifecourse experiences (e.g., education) and is a key construct in promoting healthy aging. However, the operationalization of CR and its estimated association with late-life cognition vary. The purpose of this study was to systematically examine the operationalization of CR and the relationship between its operationalization and late-life cognition. Methods We performed a comprehensive review of experiences (proxies) used to operationalize CR. The review informed quantitative analyses using data from 1366 participants of the Memory and Aging Project to examine (1) relationships between proxies and (2) the relationship between operationalization and late-life cognition. We also conducted a factor analysis with all identified CR experiences to create a composite lifecourse CR score. Generalized linear mixed models examined the relationship between operationalizations and global cognition, with five domains of cognition as secondary outcomes to examine consistency. Results Based on a review of 753 articles, we found that the majority (92.3%) of the 28 commonly used proxies have weak to no correlation with one another. There was substantial variability in the association between operationalizations and late-life global cognition (median effect size: 0.99, IQR: 0.34 to 1.39), and there was not strong consistency in the association between CR operationalizations and the five cognitive domains (mean consistency: 56.1%). The average estimate for the 28 operationalizations was 0.91 (SE = 0.48), compared to 2.48 (SE = 0.40) for the lifecourse score, which was associated with all five domains of cognition. Conclusions Inconsistent methodology is theorized to be a major limitation of CR research and a barrier to the identification of impactful experiences for healthy cognitive aging. Given the weak associations among proxies, it is not surprising that the relationship between CR and late-life cognition depends on the experience used to operationalize CR. Scores using multiple experiences across the lifecourse may help overcome such limitations. Adherence to a lifecourse approach and a collaborative movement towards a consensus operationalization of CR are imperative shifts in the study of CR that can better inform research on risk factors for cognitive decline and ultimately aid in the promotion of healthy aging.
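The advantage of a composite lifecourse score over any single proxy can be motivated with a small simulation: several weakly loaded proxies each correlate modestly with the latent construct, while a combined score tracks it better. PCA here is a simple stand-in for the paper's factor analysis, and the loadings and sample are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate proxies that each load weakly on an unobserved reserve construct.
n, k = 1366, 6
latent = rng.normal(size=n)                      # unobserved cognitive reserve
loadings = np.array([0.6, 0.5, 0.4, 0.3, 0.3, 0.2])
proxies = latent[:, None] * loadings + rng.normal(0, 1, (n, k))

# Composite score: z-score each proxy, take the first principal component.
z = (proxies - proxies.mean(0)) / proxies.std(0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
score = z @ vt[0]

# The composite tracks the latent construct better than any single proxy.
best_single = max(abs(np.corrcoef(proxies[:, j], latent)[0, 1]) for j in range(k))
composite = abs(np.corrcoef(score, latent)[0, 1])
print(round(best_single, 2), round(composite, 2))
```

This mirrors the abstract's finding that the lifecourse score (average estimate 2.48) outperformed individual operationalizations (average 0.91): pooling weakly correlated proxies averages out their proxy-specific noise.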